U-shaped networks are widely used in various medical image tasks, such as segmentation, restoration and reconstruction, but most of them rely on centralized learning and thus ignore privacy issues. To address these privacy concerns, federated learning (FL) and split learning (SL) have attracted increasing attention. However, it is hard for either FL or SL to balance local computational cost, model privacy and parallel training simultaneously. To achieve this balance, we propose Robust Split Federated Learning (RoS-FL) for U-shaped medical image networks, a novel hybrid learning paradigm that combines FL and SL. Previous works cannot simultaneously preserve the privacy of the input, model parameters, labels and output. To protect all of them effectively, we design a novel splitting method for U-shaped medical image networks that splits the network into three parts hosted by different parties. Moreover, distributed learning methods usually suffer from a drift between local and global models caused by data heterogeneity. To address this, we propose a dynamic weight correction strategy (\textbf{DWCS}) to stabilize the training process and avoid model drift. Specifically, a weight correction loss is designed to quantify the drift between the models from two adjacent communication rounds, and a correction model is obtained by minimizing this loss. The weighted sum of the correction model and the last-round models is then taken as the final result. The effectiveness of the proposed RoS-FL is supported by extensive experimental results on different tasks. Related codes will be released at https://github.com/Zi-YuanYang/RoS-FL.
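Since the abstract leaves the exact form of the weight correction loss unspecified, the following is only a minimal sketch of the DWCS idea: a correction model is fit by minimizing an assumed squared-L2 drift penalty against the global models of two adjacent communication rounds, and is then mixed with the last-round weights. The loss form and all names (`drift_loss`, `correct_weights`, `alpha`) are assumptions, not the paper's implementation.

```python
# Hedged sketch of a dynamic weight correction strategy (DWCS). The paper's
# actual loss and schedule may differ; this only illustrates the mechanism.
import torch

def drift_loss(correction, prev_round, last_round):
    """Quantify drift w.r.t. the models from two adjacent rounds."""
    loss = torch.zeros(())
    for c, p, q in zip(correction, prev_round, last_round):
        loss = loss + ((c - p) ** 2).sum() + ((c - q) ** 2).sum()
    return loss

def correct_weights(prev_round, last_round, steps=100, lr=0.1, alpha=0.5):
    # Initialize the correction model from the last-round weights.
    correction = [q.clone().requires_grad_(True) for q in last_round]
    opt = torch.optim.SGD(correction, lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        drift_loss(correction, prev_round, last_round).backward()
        opt.step()
    # Final model: weighted sum of the correction and last-round weights.
    with torch.no_grad():
        return [alpha * c + (1 - alpha) * q
                for c, q in zip(correction, last_round)]

# Toy usage on the parameters of a tiny two-tensor "model".
prev = [torch.randn(4, 4), torch.randn(4)]
last = [p + 0.1 * torch.randn_like(p) for p in prev]
corrected = correct_weights(prev, last)
```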
It is widely acknowledged that datasets with high-quality data samples play an important role in artificial intelligence (AI), machine learning (ML) and related studies. However, although AI/ML was introduced into wireless research long ago, few datasets are commonly used in the research community. Without a common dataset, AI-based methods proposed for wireless systems are hard to compare with traditional baselines, or even with each other. Existing wireless AI research usually relies on datasets generated from statistical models or from ray-tracing simulations with limited environments. Statistical data hinder the trained AI models from being further fine-tuned for a specific scenario, while ray-tracing data with limited environments reduce the generalization capability of the trained AI models. In this paper, we present the Wireless AI Research Dataset (WAIR-D), which consists of two scenarios. Scenario 1 contains 10,000 environments with sparsely dropped user equipments (UEs), and Scenario 2 contains 100 environments with densely dropped UEs. The environments are randomly picked from real-world maps of more than 40 cities. The large volume of data guarantees that the trained AI models enjoy good generalization capability, while fine-tuning can easily be carried out on a specific chosen environment. Moreover, both the wireless channels and the corresponding environmental information are provided in WAIR-D, so that extra-information-aided communication mechanisms can be designed and evaluated. WAIR-D provides researchers with benchmarks to compare their designs or reproduce the results of others. In this paper, we show the detailed construction of this dataset and give examples of its use.
Pavement Distress Recognition (PDR) is an important step in pavement inspection and can be powered by image-based automation to expedite the process and reduce labor costs. Pavement images are often high-resolution with a low ratio of distressed to non-distressed areas. Advanced approaches leverage these properties by dividing images into patches and exploring discriminative features in the scale space. However, these approaches usually suffer from information loss during image resizing and low efficiency due to complex learning frameworks. In this paper, we propose a novel and efficient method for PDR. A light network named the Kernel Inversed Pyramidal Resizing Network (KIPRN) is introduced for image resizing, and can be flexibly plugged into an image classification network as a pre-network to exploit resolution and scale information. In KIPRN, pyramidal convolution and kernel inversed convolution are specifically designed to mine discriminative information across different feature granularities and scales. The mined information is passed along to the resized images to yield an informative image pyramid that assists the image classification network for PDR. We applied our method to three well-known Convolutional Neural Networks (CNNs), and conducted an evaluation on a large-scale pavement image dataset named CQU-BPDD. Extensive results demonstrate that KIPRN generally improves the pavement distress recognition of these CNN models, and that the simple combination of KIPRN and EfficientNet-B3 significantly outperforms the state-of-the-art patch-based method in both performance and efficiency.
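As an illustration of the pyramidal-convolution ingredient named above, here is a minimal sketch in which parallel branches with growing kernel sizes mine features at several granularities and are concatenated. The channel split, branch count and the kernel inversed counterpart in the actual KIPRN may well differ; this block and its names are assumptions.

```python
# Minimal pyramidal convolution block: same input, different receptive
# fields per branch, channel-wise concatenation of the branch outputs.
import torch
import torch.nn as nn

class PyramidalConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 5, 7)):
        super().__init__()
        branch_ch = out_ch // len(kernel_sizes)
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, branch_ch, k, padding=k // 2)
            for k in kernel_sizes
        )

    def forward(self, x):
        # Each branch sees the same input but a different kernel scale.
        return torch.cat([b(x) for b in self.branches], dim=1)

# Toy usage on a (batch, channels, H, W) pavement image tensor.
x = torch.randn(2, 3, 64, 64)
y = PyramidalConv(3, 48)(x)
print(y.shape)  # torch.Size([2, 48, 64, 64])
```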
Automatic pavement distress classification facilitates improving the efficiency of pavement maintenance and reducing the cost of labor and resources. A recently influential branch of this task divides pavement images into patches and addresses the problem from the perspective of multi-instance learning. However, these methods neglect the correlation between patches and suffer from low efficiency in model optimization and inference. Meanwhile, the Swin Transformer is able to address both of these issues with its unique strengths. Built upon the Swin Transformer, we present a vision Transformer named \textbf{P}avement \textbf{I}mage \textbf{C}lassification \textbf{T}ransformer (\textbf{PicT}) for pavement distress classification. In order to better exploit the discriminative information of pavement images at the patch level, the \textit{Patch Labeling Teacher} is proposed to leverage a teacher model to dynamically generate pseudo labels of patches from image labels during each iteration, and guide the model to learn the discriminative features of patches. The broad classification head of the Swin Transformer may dilute the discriminative features of distressed patches in the feature aggregation step, owing to the small distressed area ratio of pavement images. To overcome this drawback, we present a \textit{Patch Refiner} that clusters patches into different groups and only selects the highest distress-risk group to yield a slim head for the final image classification. We evaluate our method on CQU-BPDD. Extensive results show that \textbf{PicT} outperforms the second-best model by a large margin of $+2.4\%$ in P@R on the detection task and $+3.9\%$ in $F1$ on the recognition task, with 1.8x throughput, while enjoying 7x faster training speed under the same computing resources. Our codes and models have been released at \href{https://github.com/dearcaat/pict}{https://github.com/dearcaat/pict}.
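To make the Patch Refiner idea concrete, the sketch below ranks patch tokens by a learned distress-risk score and classifies only the pooled top fraction, yielding a slim head. PicT itself clusters patches into groups and keeps the highest-risk group; the top-k simplification, `keep_ratio`, and the names here are illustrative assumptions rather than the paper's design.

```python
# Hedged sketch of "classify only the highest distress-risk patches".
import torch
import torch.nn as nn

class PatchRefiner(nn.Module):
    def __init__(self, dim, num_classes, keep_ratio=0.25):
        super().__init__()
        self.risk = nn.Linear(dim, 1)         # per-patch distress risk score
        self.head = nn.Linear(dim, num_classes)
        self.keep_ratio = keep_ratio

    def forward(self, tokens):                # tokens: (B, N, D) patch tokens
        scores = self.risk(tokens).squeeze(-1)             # (B, N)
        k = max(1, int(tokens.shape[1] * self.keep_ratio))
        idx = scores.topk(k, dim=1).indices                # top-risk patches
        idx = idx.unsqueeze(-1).expand(-1, -1, tokens.shape[-1])
        kept = tokens.gather(1, idx)                       # (B, k, D)
        return self.head(kept.mean(dim=1))                 # slim head

logits = PatchRefiner(dim=96, num_classes=8)(torch.randn(2, 49, 96))
```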
Learning powerful representations in bird's-eye view (BEV) for perception tasks is a trend that has drawn extensive attention from both industry and academia. Conventional approaches in most autonomous driving algorithms perform detection, segmentation, tracking, etc., in the front or perspective view. As sensor configurations become increasingly complex, integrating multi-source information from different sensors and representing features in a unified view become vitally important. BEV perception inherits several advantages: representing the surrounding scene in BEV is intuitive and fusion-friendly, and representing objects in BEV is most desirable for subsequent modules such as planning and/or control. The core problems of BEV perception lie in (a) how to reconstruct the lost 3D information via view transformation from perspective view to BEV; (b) how to acquire ground-truth annotations in the BEV grid; (c) how to formulate the pipeline to incorporate features from different sources and views; and (d) how to adapt and generalize algorithms as sensor configurations vary across different scenarios. In this survey, we review recent work on BEV perception and provide an in-depth analysis of different solutions. In addition, several system designs of BEV approaches from industry are described. Furthermore, we introduce a full suite of practical guidelines to improve the performance of BEV perception tasks, covering camera, LiDAR and fusion inputs. Finally, we point out future research directions in this field. We hope this report will shed light on the community and encourage more research on BEV perception. We maintain an active repository to collect the latest work and provide a toolbox of tricks at https://github.com/openperceptionx/bevperception-survey-recipe.
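As a concrete toy for core problem (a), the view transformation, the classic inverse-perspective-mapping baseline projects a BEV ground-plane grid into the image under a flat-ground assumption with known intrinsics. This is only the simplest baseline among the families of solutions the survey reviews; the camera setup below (intrinsics `K`, 1.5 m height, level and forward-looking) is entirely assumed.

```python
# Flat-ground inverse perspective mapping: map BEV ground points to pixels.
import numpy as np

def ground_to_image(points_ground, K, cam_height):
    """points_ground: (N, 2) ground points (x forward, y left), in meters."""
    x, y = points_ground[:, 0], points_ground[:, 1]
    # Camera frame: x right, y down, z forward; ground lies cam_height below.
    pts_cam = np.stack([-y, np.full_like(x, cam_height), x], axis=1)
    uvw = pts_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]           # homogeneous -> pixel (u, v)

K = np.array([[800.0, 0.0, 640.0],            # assumed pinhole intrinsics
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
grid = np.stack(np.meshgrid(np.linspace(5.0, 50.0, 10),     # 5-50 m ahead
                            np.linspace(-10.0, 10.0, 10)),  # +-10 m lateral
                axis=-1).reshape(-1, 2)
pixels = ground_to_image(grid, K, cam_height=1.5)           # (100, 2) pixels
```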
Transparent objects present multiple distinct challenges to visual perception systems. First, their lack of distinguishing visual features makes transparent objects harder to detect and localize than opaque objects. Even humans find certain transparent surfaces with little specular reflection or refraction, such as glass doors, difficult to perceive. A second challenge is that common depth sensors typically used for opaque object perception cannot obtain accurate depth measurements on transparent objects due to their unique reflective properties. Stemming from these challenges, we observe that transparent object instances within the same category (e.g., cups) look more similar to each other than ordinary opaque objects of the same category do. Given this observation, this paper sets out to explore the possibility of category-level transparent object pose estimation rather than instance-level pose estimation. We propose TransNet, a two-stage pipeline that learns to estimate category-level transparent object pose using localized depth completion and surface normal estimation. TransNet is evaluated in terms of pose estimation accuracy on a recent large-scale transparent object dataset and compared to a state-of-the-art category-level pose estimation approach. The results of this comparison demonstrate that TransNet achieves improved pose estimation accuracy on transparent objects, and key findings from the accompanying ablation studies suggest future directions for improving performance.
Transparent objects are ubiquitous in household settings and pose distinct challenges for visual sensing and perception systems. The optical properties of transparent objects make conventional 3D sensors alone unreliable for object depth and pose estimation. These challenges are highlighted by the shortage of large-scale RGB-depth datasets focusing on transparent objects in real-world settings. In this work, we contribute a large-scale real-world RGB-depth transparent object dataset named ClearPose, to serve as a benchmark dataset for segmentation, scene-level depth completion and object-centric pose estimation tasks. The ClearPose dataset contains over 350K labeled real-world RGB-depth frames and 5M instance annotations covering 63 household objects. The dataset includes object categories commonly used in daily life under various lighting and occluding conditions, as well as challenging test scenarios such as occlusion by opaque or translucent objects, non-planar orientations, the presence of liquids, etc. We benchmark several state-of-the-art depth completion and object pose estimation deep neural networks on ClearPose. The dataset and benchmark source code are available at https://github.com/opipari/clearpose.
Visual perception tasks often require vast amounts of labeled data, including 3D poses and image-space segmentation masks. The process of creating such training datasets can prove difficult or time-intensive to scale up for general use. Consider the task of pose estimation for rigid objects. Deep neural network based approaches have shown good performance when trained on large public datasets. However, adapting these networks to other novel objects, or fine-tuning existing models for different environments, requires significant time investment to generate newly labeled instances. Towards this end, we propose ProgressLabeller as a method for more efficiently generating large amounts of 6D pose training data from color image sequences in a scalable manner. ProgressLabeller is also designed to support transparent and translucent objects, for which previous methods based on dense depth reconstruction would fail. We demonstrate the effectiveness of ProgressLabeller by rapidly creating a dataset of over 1M samples, with which we fine-tune a state-of-the-art pose estimation network to significantly improve downstream robotic grasping. ProgressLabeller is open-source at https://github.com/huijiezh/progresslabeller.
Thermal infrared (TIR) images have demonstrated effectiveness in providing temperature cues for multispectral pedestrian detection. Most existing methods directly inject the TIR modality into RGB-based frameworks or simply ensemble the results of the two modalities. However, this may lead to inferior detection performance, since RGB and TIR features generally carry modality-specific noise, which may deteriorate as the features propagate through the network. Therefore, this work proposes an effective and efficient cross-modality fusion module called the Bidirectional Adaptive Attention Gate (BAA-Gate). Based on the attention mechanism, the BAA-Gate is devised to distill informative features and recalibrate the representations asymptotically. Concretely, a bidirectional multi-stage fusion strategy is adopted to progressively optimize the features of the two modalities and preserve their specificity during propagation. Moreover, an adaptive interaction of the BAA-Gate is introduced via an illumination-based weighting strategy to adaptively adjust the recalibrating and aggregating strength in the BAA-Gate and enhance robustness to illumination changes. Extensive experiments on the challenging KAIST dataset demonstrate the superior performance of our method at a satisfactory speed.
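A minimal sketch of a bidirectional, illumination-weighted attention gate in the spirit of the description above: channel attention computed from each modality recalibrates the other, and a scalar illumination weight shifts trust between RGB and TIR. The attention form, where the illumination weight enters, and all names are assumptions rather than the paper's BAA-Gate.

```python
# Illustrative bidirectional gate: each modality's channel attention
# recalibrates the other, scaled by an illumination weight in [0, 1].
import torch
import torch.nn as nn

class BidirectionalGate(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.rgb_att = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                     nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.tir_att = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                     nn.Conv2d(ch, ch, 1), nn.Sigmoid())

    def forward(self, rgb, tir, w_illum):
        # w_illum high under good lighting, so RGB is trusted more.
        rgb_out = rgb + (1 - w_illum) * self.rgb_att(tir) * rgb  # TIR -> RGB
        tir_out = tir + w_illum * self.tir_att(rgb) * tir        # RGB -> TIR
        return rgb_out, tir_out

rgb, tir = torch.randn(2, 32, 40, 40), torch.randn(2, 32, 40, 40)
rgb_f, tir_f = BidirectionalGate(32)(rgb, tir, w_illum=0.8)
```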
How to extract significant point cloud features and estimate the pose between point clouds remains a challenging question, due to their inherent lack of structure and ambiguous order permutation. Despite significant improvements of deep learning-based methods on most 3D computer vision tasks, such as object classification, object segmentation and point cloud registration, the consistency between features remains under-exploited in existing learning-based pipelines. In this paper, we present a novel learning-based alignment network for complex alignment scenes, termed Deep Feature Consistency, which consists of three main modules: a multiscale graph feature merging network that converts the geometric correspondence set into high-dimensional features, a correspondence weighting module that constructs multiple candidate inlier subsets, and a Procrustes approach named deep feature matching that provides a closed-form solution for estimating the relative pose. As the most important step of the deep feature matching module, a feature consistency matrix is constructed for each inlier subset to obtain its principal vector, which serves as the inlier likelihood of the corresponding subset. We comprehensively validate the robustness and effectiveness of our approach on the 3DMatch dataset and the KITTI odometry dataset. For large indoor scenes, registration results on the 3DMatch dataset demonstrate that our method outperforms both the state-of-the-art traditional and learning-based methods. For KITTI outdoor scenes, our method still achieves low transformation errors. We also explore its strong generalization capability in cross-dataset settings.
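The closed-form step named above is a standard weighted Procrustes (Kabsch) solver and can be sketched directly: given correspondences and per-correspondence inlier likelihoods (in the paper, obtained as the principal vector of a feature consistency matrix, which is not reproduced here), it recovers the rigid pose in closed form. Variable names and the toy check are illustrative.

```python
# Weighted Procrustes: correspondences + inlier likelihoods -> rigid pose.
import math
import torch

def weighted_procrustes(src, dst, w):
    """src, dst: (N, 3) corresponding points; w: (N,) inlier likelihoods."""
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(0)                    # weighted centroids
    mu_d = (w[:, None] * dst).sum(0)
    H = (src - mu_s).T @ (w[:, None] * (dst - mu_d))    # weighted covariance
    U, _, Vt = torch.linalg.svd(H)
    d = torch.det(Vt.T @ U.T).item()                    # reflection guard
    R = Vt.T @ torch.diag(torch.tensor([1.0, 1.0, d])) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Toy check: recover a known rotation and translation exactly.
c, s = math.cos(0.3), math.sin(0.3)
R_true = torch.tensor([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
src = torch.randn(100, 3)
dst = src @ R_true.T + torch.tensor([1.0, 2.0, 3.0])
R, t = weighted_procrustes(src, dst, torch.ones(100))
assert torch.allclose(R, R_true, atol=1e-4)
```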